
    Calculating confidence intervals for impact numbers

    BACKGROUND: Standard effect measures such as risk difference and attributable risk are frequently used in epidemiological studies and public health research to describe the effect of exposures. Recently, so-called impact numbers have been proposed, which express the population impact of exposures in the form of specific person or case numbers. To describe estimation uncertainty, it is necessary to calculate confidence intervals for these new effect measures. In this paper, we present methods to calculate confidence intervals for the new impact numbers in the situation of cohort studies. METHODS: Besides the exposure impact number (EIN), which is equivalent to the well-known number needed to treat (NNT), two other impact numbers are considered: the case impact number (CIN) and the exposed cases impact number (ECIN), which describe the number of all cases (CIN) or exposed cases (ECIN) with the outcome among whom one case is attributable to the exposure. The CIN and ECIN are the reciprocals of the population attributable risk (PAR) and the attributable fraction among the exposed (AF(e)), respectively. Thus, confidence intervals for these impact numbers can be calculated by inverting and exchanging the confidence limits of the PAR and AF(e). EXAMPLES: We considered a British and a Japanese cohort study that investigated the association between smoking and death from coronary heart disease (CHD) and between smoking and stroke, respectively. We used the reported death and disease rates and calculated impact numbers with corresponding 95% confidence intervals. In the British study, the CIN was 6.46, i.e. on average, one of any 6 to 7 persons who died of CHD was attributable to smoking, with a corresponding 95% confidence interval of [3.84, 20.36]. For the exposed cases, an ECIN of 2.64 with 95% confidence interval [1.76, 5.29] was obtained. In the Japanese study, the CIN was 6.67, i.e. on average, one of any 6 to 7 persons who had a stroke was attributable to smoking, with a corresponding 95% confidence interval of [3.80, 27.27]. For the exposed cases, an ECIN of 4.89 with 95% confidence interval of [2.86, 16.67] was obtained. CONCLUSION: The consideration of impact numbers in epidemiological analyses provides additional information and aids the interpretation of study results, e.g. in public health research. In practical applications, it is necessary to describe estimation uncertainty. We have shown that confidence intervals for the new impact numbers can be calculated by means of known methods for attributable risk measures. Therefore, estimated impact numbers should always be complemented by appropriate confidence intervals.
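    The mechanics of this inversion are easy to illustrate. Below is a minimal Python sketch (our addition, not from the paper) that converts a point estimate and 95% confidence limits for an attributable-risk measure such as the PAR or AF(e) into the corresponding impact number and its interval by taking reciprocals and exchanging the limits; the input values are hypothetical, back-calculated from the British CIN quoted above purely for illustration.

```python
def impact_number_ci(measure, lower, upper):
    """Turn an attributable-risk measure (e.g. PAR or AF_e) and its confidence
    limits into the corresponding impact number (CIN or ECIN) with its interval,
    by inverting the point estimate and inverting/exchanging the limits."""
    return 1.0 / measure, 1.0 / upper, 1.0 / lower

# Hypothetical PAR and limits, chosen so the reciprocals roughly reproduce
# the British CHD example quoted above.
cin, cin_lo, cin_hi = impact_number_ci(0.1548, 0.0491, 0.2604)
print(f"CIN = {cin:.2f}, 95% CI [{cin_lo:.2f}, {cin_hi:.2f}]")  # ~6.46, [3.84, 20.37]
```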

    LDL-cholesterol lowering effect of a generic product of simvastatin compared to simvastatin (Zocor™) in Thai hypercholesterolemic subjects – a randomized crossover study, the first report from Thailand

    BACKGROUND: It is commonly agreed that people with high blood LDL-cholesterol have a higher risk of coronary artery disease (CAD) than people with low blood LDL-cholesterol. Due to the increasingly high costs of medication in Thailand, the government has set up several measures to combat the problem. One such strategy is to promote the utilization of locally manufactured drug products, especially those contained in the National Drug List. Simvastatin, an HMG-CoA reductase inhibitor, is listed as an essential drug for the treatment of hypercholesterolemia. Here, we report a study on the LDL-cholesterol-lowering effect of a generic simvastatin product in comparison with Zocor®, in 43 healthy Thai volunteers. METHOD: The generic product tested was Eucor®, locally manufactured by Greater Pharma Ltd., Part., Thailand, and the reference product was Zocor® (Merck Sharp & Dohme, USA). The two products were administered as 10-mg single oral doses in a two-period crossover design. After drug administration, serial blood samples were collected every 4 weeks for 16 weeks. The major parameter monitored in this study was blood LDL-cholesterol. RESULT: After taking the drugs for the first 8 weeks, no statistically significant difference was detected in blood LDL-cholesterol between the first (Zocor®-treated) and the second (Eucor®-treated) groups. After crossover and taking the drugs for a further 8 weeks, a similar result was obtained, i.e., no significant difference in blood LDL-cholesterol between the first (Eucor®-treated) and the second (Zocor®-treated) groups was observed. Upon completion of the 16-week study, there was also no statistically significant difference in the changes of all tested blood parameters between the two products (randomized block ANOVA, N = 37). Only minor side effects, mainly dizziness and nausea, were observed with both products. CONCLUSION: Our study demonstrated no significant differences in therapeutic effect or safety between the generic and original simvastatin products.
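    For readers who want to see what the reported analysis roughly amounts to, here is a small Python sketch (our illustration with invented numbers, not the study data) of a randomized block ANOVA in which each subject serves as a block and the drug product is the treatment factor; period and sequence terms, which a full crossover analysis would include, are omitted for brevity.

```python
# Illustrative only: simulated LDL-cholesterol changes with no true difference
# between products, analysed with subject as the blocking factor.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
n = 37                                           # subjects completing both periods
subject = np.repeat(np.arange(n), 2)
drug = np.tile(["Eucor", "Zocor"], n)
subject_effect = np.repeat(rng.normal(0, 8, n), 2)             # between-subject variability
ldl_change = rng.normal(-35, 10, size=2 * n) + subject_effect  # mg/dL change, no drug effect

df = pd.DataFrame({"subject": subject, "drug": drug, "ldl_change": ldl_change})
fit = ols("ldl_change ~ C(drug) + C(subject)", data=df).fit()  # randomized block ANOVA
print(sm.stats.anova_lm(fit, typ=2))             # expect a non-significant drug effect
```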

    Generation of photovoltage in graphene on a femtosecond time scale through efficient carrier heating

    Graphene is a promising material for ultrafast and broadband photodetection. Earlier studies addressed the general operation of graphene-based photo-thermoelectric devices and the switching speed, which is limited by the charge-carrier cooling time and is on the order of picoseconds. However, the generation of the photovoltage could occur on a much faster time scale, as it is associated with the carrier heating time. Here, we measure the photovoltage generation time and find it to be faster than 50 femtoseconds. As a proof-of-principle application of this ultrafast photodetector, we use graphene to directly measure, electrically, the pulse duration of a sub-50 femtosecond laser pulse. The observation that carrier heating is ultrafast suggests that energy from absorbed photons can be efficiently transferred to carrier heat. To study this, we examine the spectral response and find a constant spectral responsivity between 500 and 1500 nm, consistent with efficient electron heating. These results are promising for ultrafast and broadband photodetector applications.

    How managers can build trust in strategic alliances: a meta-analysis on the central trust-building mechanisms

    Trust is an important driver of superior alliance performance. Alliance managers are influential in this regard because trust requires active involvement, commitment and the dedicated support of the key actors involved in the strategic alliance. Despite the importance of trust for explaining alliance performance, little effort has been made to systematically investigate the mechanisms that managers can use to purposefully create trust in strategic alliances. We use Parkhe's (1998b) theoretical framework to derive nine hypotheses that distinguish between process-based, characteristic-based and institutional-based trust-building mechanisms. Our meta-analysis of 64 empirical studies shows that trust is strongly related to alliance performance. Process-based mechanisms are more important for building trust than characteristic- and institutional-based mechanisms. The effects of prior ties and asset specificity are not as strong as expected, and the impact of safeguards on trust is not well understood. Overall, theoretical trust research has far outpaced empirical research, and promising opportunities for future empirical research exist.
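    As a generic illustration of the pooling step in such a meta-analysis (our sketch with invented correlations and sample sizes, not the 64 studies analysed here), the following Python code combines study-level trust-performance correlations with a DerSimonian-Laird random-effects model on the Fisher-z scale.

```python
import numpy as np

r = np.array([0.42, 0.31, 0.55, 0.28, 0.47])   # hypothetical study-level correlations
n = np.array([120, 85, 240, 60, 150])          # hypothetical sample sizes

z = np.arctanh(r)                              # Fisher z transform
v = 1.0 / (n - 3)                              # within-study variance of z

w = 1.0 / v                                    # fixed-effect weights
z_fixed = np.sum(w * z) / np.sum(w)
q = np.sum(w * (z - z_fixed) ** 2)             # heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(z) - 1)) / c)        # DerSimonian-Laird between-study variance

w_re = 1.0 / (v + tau2)                        # random-effects weights
z_pooled = np.sum(w_re * z) / np.sum(w_re)
print(np.tanh(z_pooled))                       # pooled correlation on the r scale
```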

    Revised estimates of influenza-associated excess mortality, United States, 1995 through 2005

    BACKGROUND: Excess mortality due to seasonal influenza is thought to be substantial. However, influenza may often not be recognized as the cause of death. Imputation methods are therefore required to assess the public health impact of influenza. The purpose of this study was to obtain estimates of monthly excess mortality due to influenza that are based on an epidemiologically meaningful model. METHODS AND RESULTS: U.S. monthly all-cause mortality, 1995 through 2005, was hierarchically modeled as a Poisson variable with a mean that depends linearly both on seasonal covariates and on influenza-certified mortality. The model also allowed for overdispersion to account for extra variation that is not captured by the Poisson error. The coefficient associated with influenza-certified mortality was interpreted as the ratio of total influenza mortality to influenza-certified mortality. Separate models were fitted for four age categories (<18, 18–49, 50–64, 65+). Bayesian parameter estimation was performed using Markov chain Monte Carlo methods. For the eleven-year study period, a total of 260,814 (95% CI: 201,011–290,556) deaths were attributed to influenza, corresponding to an annual average of 23,710, or 0.91% of all deaths. CONCLUSION: Annual estimates of influenza mortality were highly variable from year to year, but they were systematically lower than previously published estimates. The excellent fit of our model to the data suggests the validity of our estimates.
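    A crude, non-Bayesian sketch of the underlying idea is given below (Python, simulated data, with ordinary least squares standing in for the hierarchical overdispersed-Poisson model fitted by MCMC in the paper): the coefficient on influenza-certified deaths plays the role of the ratio of total to certified influenza mortality, and multiplying it by the certified counts gives the imputed total.

```python
# Sketch only: simulated monthly series and an OLS fit; the study itself uses a
# hierarchical Bayesian Poisson model with overdispersion, estimated by MCMC.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
months = np.arange(132)                                   # 11 years of monthly data
t = 2 * np.pi * months / 12.0
intensity = rng.uniform(0.3, 2.0, size=11)[months // 12]  # epidemic severity varies by season
flu_cert = rng.poisson(60 * intensity * (np.cos(t) + 1.2))          # certified influenza deaths
deaths = rng.poisson(180000 + 15000 * np.cos(t) + 8.0 * flu_cert)   # assumed true ratio of 8

X = sm.add_constant(np.column_stack([np.sin(t), np.cos(t), flu_cert]))
fit = sm.OLS(deaths, X).fit()
ratio = fit.params[-1]                       # estimated total/certified mortality ratio (~8)
print(ratio, ratio * flu_cert.sum())         # implied total influenza-attributable deaths
```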

    A Comparison of Three Methods of Mendelian Randomization when the Genetic Instrument, the Risk Factor and the Outcome Are All Binary

    The instrumental variable method (referred to as Mendelian randomization when the instrument is a genetic variant) was initially developed to infer a causal effect of a risk factor on some outcome of interest in a linear model. Adapting this method to nonlinear models, however, is known to be problematic. In this paper, we consider the simple case in which the genetic instrument, the risk factor, and the outcome are all binary. We compare via simulations the usual two-stage estimate of a causal odds ratio and its adjusted version with a recently proposed estimate developed in the context of a clinical trial with noncompliance. In contrast to the former two, we confirm that the latter is (under some conditions) a valid estimate of a causal odds ratio defined in the subpopulation of compliers, and we propose its use in the context of Mendelian randomization. By analogy with a clinical trial with noncompliance, compliers are those individuals for whom the presence/absence of the risk factor X is determined by the presence/absence of the genetic variant Z (i.e., for whom we would observe X = Z whatever the alleles randomly received at conception). We also recall and illustrate the huge variability of instrumental variable estimates when the instrument is weak (i.e., with a low percentage of compliers, as is typically the case with genetic instruments, for which this proportion is frequently smaller than 10%): the inter-quartile range of our simulated estimates was up to 18 times wider than with a conventional (e.g., intention-to-treat) approach. We thus conclude that the need to find stronger instruments is probably as important as the need to develop a methodology that allows consistent estimation of a causal odds ratio.
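    The following toy simulation (Python; our illustration, not one of the three estimators compared in the paper, and on the risk-difference rather than the odds-ratio scale) shows the basic instrumental-variable ratio estimate with an all-binary Z, X and Y, and how its spread blows up when the proportion of compliers is small.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n, p_complier, effect=0.15):
    z = rng.binomial(1, 0.3, n)                    # binary genetic instrument
    complier = rng.binomial(1, p_complier, n)      # X tracks Z only for compliers
    x = np.where(complier == 1, z, rng.binomial(1, 0.5, n))
    y = rng.binomial(1, 0.2 + effect * x)          # true causal risk increase = effect
    return z, x, y

def iv_ratio(z, x, y):
    # Z-Y association divided by Z-X association (risk-difference scale)
    return (y[z == 1].mean() - y[z == 0].mean()) / (x[z == 1].mean() - x[z == 0].mean())

for p_c in (0.5, 0.05):                            # strong vs weak instrument
    est = [iv_ratio(*simulate(5000, p_c)) for _ in range(500)]
    q1, q3 = np.percentile(est, [25, 75])
    print(f"compliers={p_c:.0%}: median={np.median(est):.3f}, IQR width={q3 - q1:.3f}")
```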

    Sample Size under Inverse Negative Binomial Group Testing for Accuracy in Parameter Estimation

    BACKGROUND: The group testing method has been proposed for the detection and estimation of genetically modified plants (adventitious presence of unwanted transgenic plants, AP). For binary response variables (presence or absence), group testing is efficient when the prevalence is low, so estimation, detection, and sample size methods have been developed under the binomial model. However, when the event is rare (low prevalence …). METHODOLOGY/PRINCIPAL FINDINGS: This research proposes three sample size procedures (two computational and one analytic) for estimating prevalence using group testing under inverse (negative) binomial sampling. These methods provide the required number of positive pools (r_m), given a pool size (k), for estimating the proportion of AP plants using the Dorfman model and inverse (negative) binomial sampling. We give real and simulated examples to show how to apply these methods and the proposed sample-size formula. The Monte Carlo method was used to study the coverage and level of assurance achieved by the proposed sample sizes. An R program to create other scenarios is given in Appendix S2. CONCLUSIONS: The three methods ensure precision in the estimated proportion of AP because they guarantee that the width (W) of the confidence interval (CI) will be equal to, or narrower than, the desired width (ω), with a probability of γ. With the Monte Carlo study we found that the computational Wald procedure (method 2) produces the most precise sample size (with coverage and assurance levels very close to nominal values), that the sample size based on the Clopper-Pearson CI (method 1) is conservative (overestimates the sample size), and that the analytic Wald sample size method we developed (method 3) sometimes underestimates the optimum number of pools.
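    To make the sample-size logic concrete, here is a small Monte Carlo sketch in Python (our illustration under simplifying assumptions, not the authors' procedures or their R program): for a candidate number of positive pools r, it simulates inverse (negative) binomial group testing with pool size k, forms a Wald-type confidence interval for the AP proportion, and returns the smallest r for which the interval width stays below the target width with the desired assurance.

```python
import numpy as np

rng = np.random.default_rng(7)

def ci_width_once(p, k, r):
    theta = 1.0 - (1.0 - p) ** k                   # probability that a pool of size k is positive
    n_pools = r + rng.negative_binomial(r, theta)  # pools tested until r positives (inverse sampling)
    theta_hat = r / n_pools
    se = np.sqrt(theta_hat ** 2 * (1 - theta_hat) / r)      # large-sample SE under inverse sampling
    lo, hi = max(theta_hat - 1.96 * se, 0.0), min(theta_hat + 1.96 * se, 1.0)
    to_p = lambda th: 1.0 - (1.0 - th) ** (1.0 / k)         # back to the plant-level proportion
    return to_p(hi) - to_p(lo)

def required_positive_pools(p, k, omega, gamma, reps=2000):
    """Smallest r such that the CI width is <= omega with probability >= gamma."""
    for r in range(2, 500):
        widths = np.array([ci_width_once(p, k, r) for _ in range(reps)])
        if np.mean(widths <= omega) >= gamma:
            return r
    return None

print(required_positive_pools(p=0.01, k=10, omega=0.02, gamma=0.95))
```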

    Uncovering the effect of low-frequency static magnetic field on tendon-derived cells: from mechanosensing to tenogenesis

    Magnetotherapy has been receiving increased attention as an attractive strategy for modulating cell physiology directly at the site of injury, thereby providing the medical community with a safe and non-invasive therapy. Yet, how magnetic fields influence tendon cells at both the cellular and molecular levels remains unclear. Thus, the influence of a low-frequency static magnetic field (2 Hz, 350 mT) on human tendon-derived cells was studied using different exposure times (4 and 8 h; short-term studies) and different regimens of exposure to an 8 h period of magnetic stimulation (continuous, every 24 h or every 48 h; long-term studies). Herein, 8 h stimulation in the short-term studies significantly upregulated the expression of the tendon-associated genes SCX, COL1A1, TNC and DCN (p < 0.05) and altered intracellular Ca2+ levels (p < 0.05). Additionally, the every-24 h regimen of stimulation significantly upregulated COL1A1, COL3A1 and TNC at day 14 in comparison to control (p < 0.05), whereas continuous exposure differentially regulated the release of the immunomodulatory cytokines IL-1β and IL-10 (p < 0.001), but only at day 7 in comparison to controls. Altogether, these results provide new insights on how low-frequency static magnetic fields fine-tune the behaviour of tendon cells according to the magnetic settings used, which we foresee to represent an interesting candidate to guide tendon regeneration.

    Online detection and quantification of epidemics

    BACKGROUND: Time series data are increasingly available in health care, especially for the purpose of disease surveillance. The analysis of such data has long used periodic regression models to detect outbreaks and estimate epidemic burdens. However, implementation of the method may be difficult due to lack of statistical expertise. No dedicated tool is available to perform and guide analyses. RESULTS: We developed an online computer application for the analysis of epidemiologic time series, available at http://www.u707.jussieu.fr/periodic_regression/. The data are assumed to consist of a periodic baseline level and irregularly occurring epidemics. The program allows estimation of the periodic baseline level and the associated upper forecast limit. The latter defines a threshold for epidemic detection. The burden of an epidemic is defined as the cumulated signal in excess of the baseline estimate. The user is guided through the necessary choices for the analysis. We illustrate the usage of the online epidemic analysis tool with two examples: the retrospective detection and quantification of excess pneumonia and influenza (P&I) mortality, and the prospective surveillance of gastrointestinal disease (diarrhoea). CONCLUSION: The online application allows easy detection of special events in an epidemiologic time series and quantification of excess mortality/morbidity as a change from baseline. It should be a valuable tool for field and public health practitioners.
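    The core of the method is easy to sketch. The Python code below (our illustration on simulated weekly counts, using a Serfling-type harmonic baseline and a simple Gaussian upper limit rather than the exact procedure implemented by the tool) fits the periodic baseline, flags time points above the upper forecast limit, and sums the excess over baseline as the epidemic burden.

```python
import numpy as np

rng = np.random.default_rng(3)
weeks = np.arange(5 * 52)                       # five years of weekly counts
t = 2 * np.pi * weeks / 52.0
counts = rng.poisson(100 + 30 * np.cos(t))      # periodic baseline
counts[200:210] += 80                           # inject one artificial epidemic

# Fit the periodic baseline by least squares on harmonic terms.  (The real tool
# would exclude known epidemic periods from this fit; omitted here for brevity.)
X = np.column_stack([np.ones_like(t), np.sin(t), np.cos(t)])
beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
baseline = X @ beta
threshold = baseline + 1.96 * np.std(counts - baseline, ddof=3)   # upper forecast limit

epidemic = counts > threshold                           # detection
burden = np.sum((counts - baseline)[epidemic])          # cumulated excess over baseline
print(np.flatnonzero(epidemic), burden)
```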

    The Human Operculo-Insular Cortex Is Pain-Preferentially but Not Pain-Exclusively Activated by Trigeminal and Olfactory Stimuli

    Increasing evidence about the central nervous representation of pain in the brain suggests that the operculo-insular cortex is a crucial part of the pain matrix. The pain-specificity of a brain region may be tested by administering nociceptive stimuli while controlling for unspecific activations by administering non-nociceptive stimuli. We applied this paradigm to nasal chemosensation, delivering trigeminal or olfactory stimuli, to verify the pain-specificity of the operculo-insular cortex. In detail, brain activations due to intranasal stimulation with the non-nociceptive olfactory stimuli hydrogen sulfide (5 ppm) or vanillin (0.8 ppm) were used to mask brain activations due to somatosensory, clearly nociceptive trigeminal stimulation with gaseous carbon dioxide (75% v/v). Functional magnetic resonance imaging (fMRI) data were recorded from 12 healthy volunteers in a 3T head scanner during stimulus administration using an event-related design. We found that significantly more activations following nociceptive than non-nociceptive stimuli were localized bilaterally in two restricted clusters in the brain containing the primary and secondary somatosensory areas and the insular cortices, consistent with the operculo-insular cortex. However, these activations completely disappeared when eliminating activations associated with the administration of olfactory stimuli, which were small but measurable. While the present experiments verify that the operculo-insular cortex plays a role in the processing of nociceptive input, they also show that it is not a pain-exclusive brain region and allow, in the experimental context, for the interpretation that the operculo-insular cortex plays a major role in the detection of and response to salient events, whether or not these events are nociceptive or painful.